Learning to Optimize Under Non-Stationarity
Authors
Abstract
Similar Resources
Learning under Non-stationarity: Covariate Shift Adaptation by Importance Weighting
The goal of supervised learning is to estimate an underlying input-output function from its input-output training samples so that output values for unseen test input points can be predicted. A common assumption in supervised learning is that the training input points follow the same probability distribution as the test input points. However, this assumption is not satisfied, for example, when o...
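The importance-weighting idea named in this related work can be illustrated with a minimal sketch: each training example is weighted by the density ratio p_test(x)/p_train(x) before fitting. The Gaussian densities and polynomial model below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of importance-weighted least squares under covariate shift.
# Assumes the density ratio w(x) = p_test(x) / p_train(x) is known (here two
# Gaussians, purely for illustration); in practice it would be estimated.

rng = np.random.default_rng(0)

# Training inputs drawn from p_train; the test distribution is shifted.
x_train = rng.normal(loc=0.0, scale=1.0, size=200)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=200)

def density_ratio(x, mu_train=0.0, mu_test=1.0, sigma=1.0):
    """w(x) = p_test(x) / p_train(x) for two Gaussians (illustrative)."""
    log_w = ((x - mu_train) ** 2 - (x - mu_test) ** 2) / (2 * sigma ** 2)
    return np.exp(log_w)

w = density_ratio(x_train)

# Importance-weighted linear regression on polynomial features:
# minimize  sum_i w(x_i) * (y_i - f(x_i))^2.
Phi = np.vander(x_train, N=4, increasing=True)   # design matrix
W = np.diag(w)
theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y_train)
print("importance-weighted coefficients:", theta)
```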
Learning under Non-Stationarity: Covariate Shift and Class-Balance Change
One of the fundamental assumptions behind many supervised machine learning algorithms is that training and test data follow the same probability distribution. However, this important assumption is often violated in practice, for example, because of an unavoidable sample selection bias or non-stationarity of the environment. Due to violation of the assumption, standard machine learning methods s...
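For the class-balance-change part of this setting, a hedged sketch of the standard correction is to reweight each training example by the ratio of test to training class priors. The priors and classifier below are assumptions for illustration; in the paper's setting the test-time priors would themselves have to be estimated from unlabeled test data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Imbalanced training set: roughly 90% class 0, 10% class 1.
n = 1000
y_train = (rng.random(n) < 0.1).astype(int)
x_train = rng.normal(loc=y_train * 2.0, scale=1.0, size=(n,)).reshape(-1, 1)

# Assumed (hypothetical) test-time class balance: 50% / 50%.
train_prior = np.bincount(y_train) / n
test_prior = np.array([0.5, 0.5])

# Weight each example by pi_test(y) / pi_train(y).
sample_weight = test_prior[y_train] / train_prior[y_train]

clf = LogisticRegression()
clf.fit(x_train, y_train, sample_weight=sample_weight)
print("intercept with class-balance correction:", clf.intercept_)
```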
متن کاملTwo Projection Pursuit Algorithms for Machine Learning under Non-Stationarity
Acknowledgements: I am most grateful to Professor K.-R. Müller for having provided me the opportunity, not only to gain research experience in his IDA/Machine Learning Laboratory, but also for employing me to do so. Without his assistance I would not have been able to complete my degree at the BCCN and continue to the PhD level in a dignified manner. I have greatly appreciated having been t...
Learning to Optimize
How does a boundedly rational optimizing agent make decisions? Can such an agent learn to behave rationally? We address these questions in a standard regulator environment. Our behavioral primitive is anchored to the shadow price of the state vector. The regulator forecasts the value of an additional unit of the state tomorrow, and uses this forecast to choose her control. The value of the cont...
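As a heavily assumed illustration of this decision rule (not the paper's model), the sketch below uses a scalar linear-quadratic regulator: the agent keeps an estimate of the value-function curvature, forecasts tomorrow's shadow price with it, chooses the control from the first-order condition, and revises the estimate until the forecast is self-confirming.

```python
# Scalar LQ regulator, all parameters assumed for illustration:
#   minimize  sum_t beta^t (q*x_t^2 + r*u_t^2)   s.t.  x' = a*x + b*u
a, b = 0.9, 1.0
q, r = 1.0, 0.5
beta = 0.95

def control(x, p_hat):
    """Control implied by the FOC when tomorrow's value is forecast as p_hat*x'^2."""
    return -beta * a * b * p_hat * x / (r + beta * b**2 * p_hat)

def revise(p_hat):
    """One revision of the curvature estimate (a Riccati-type update)."""
    return q + beta * a**2 * p_hat - (beta * a * b * p_hat)**2 / (r + beta * b**2 * p_hat)

p_hat = 0.0
for _ in range(200):
    p_hat = revise(p_hat)          # iterate until the forecast is self-confirming
print("fixed-point curvature p:", p_hat)
print("control at x = 1:", control(1.0, p_hat))
```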
Learning to Optimize
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided pol...
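A minimal sketch of this framing treats the update rule itself as a small parameterized policy and scores it by the loss reached after unrolling it on sample problems. The random search below is only a stand-in for the paper's guided policy search, and all problem details are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def unrolled_loss(policy_params, steps=20):
    """Average final loss of the learned optimizer on random quadratic problems."""
    lr, momentum = policy_params
    total = 0.0
    for _ in range(10):
        A = rng.normal(size=(3, 3))
        H = A @ A.T + np.eye(3)             # random positive-definite Hessian
        x, v = rng.normal(size=3), np.zeros(3)
        for _ in range(steps):
            grad = H @ x
            v = momentum * v - lr * grad    # the policy: update from gradient + state
            x = x + v
        total += 0.5 * x @ H @ x
    return total / 10

# "Train" the optimizer: search over policy parameters (learning rate, momentum).
best, best_loss = None, np.inf
for _ in range(500):
    cand = rng.uniform([0.0, 0.0], [0.5, 1.0])
    loss = unrolled_loss(cand)
    if loss < best_loss:
        best, best_loss = cand, loss
print("learned update rule (lr, momentum):", best, "final loss:", best_loss)
```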
Journal
Journal title: SSRN Electronic Journal
Year: 2018
ISSN: 1556-5068
DOI: 10.2139/ssrn.3261050